Augmented Lagrangian method
Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective; the difference is that the augmented Lagrangian method adds yet another term, designed to mimic a Lagrange multiplier. The augmented Lagrangian is not the same as the method of Lagrange multipliers.
Viewed differently, the unconstrained objective is the Lagrangian of the constrained problem, with an additional penalty term (the augmentation).
The method was originally known as the method of multipliers, and was much studied in the 1970s and 1980s as a good alternative to penalty methods. It was first discussed by Magnus Hestenes in 1969〔M.R. Hestenes, "Multiplier and gradient methods", ''Journal of Optimization Theory and Applications'', 4, 1969, pp. 303–320〕 and by Powell in 1969.〔M.J.D. Powell, "A method for nonlinear constraints in minimization problems", in ''Optimization'' ed. by R. Fletcher, Academic Press, New York, NY, 1969, pp. 283–298.〕 The method was studied by R. Tyrrell Rockafellar in relation to Fenchel duality, particularly in relation to proximal-point methods, Moreau–Yosida regularization, and maximal monotone operators; these methods were applied in structural optimization. The method was also studied by Dimitri Bertsekas, notably in his 1982 book,〔Dimitri P. Bertsekas, ''Constrained optimization and Lagrange multiplier methods'', Athena Scientific, 1996 (first published 1982)〕 together with extensions involving nonquadratic regularization functions, such as entropic regularization, which gives rise to the "exponential method of multipliers," a method that handles inequality constraints with a twice differentiable augmented Lagrangian function.
Since the 1970s, sequential quadratic programming (SQP) and interior point methods (IPM) have received increasing attention, in part because they more easily use sparse matrix subroutines from numerical software libraries, and in part because IPMs have proven complexity results via the theory of self-concordant functions. The augmented Lagrangian method was rejuvenated by the optimization systems LANCELOT and AMPL, which allowed sparse matrix techniques to be used on seemingly dense but "partially separable" problems. The method is still useful for some problems.〔, chapter 17〕
Around 2007, there was a resurgence of augmented Lagrangian methods in fields such as total-variation denoising and compressed sensing.
In particular, a variant of the standard augmented Lagrangian method that uses partial updates (similar to the Gauss-Seidel method for solving linear equations) known as the alternating direction method of multipliers or ADMM gained some attention.
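For concreteness, the following is a minimal sketch of the ADMM iteration applied to the lasso problem \min \tfrac{1}{2}\|Ax-b\|^2 + \alpha\|x\|_1, split as f(x) + g(z) subject to x = z. The problem data, the penalty parameter \rho, and the weight \alpha are illustrative choices, and only NumPy is assumed.
<syntaxhighlight lang="python">
# Minimal ADMM sketch for the lasso problem, split as
#   minimize (1/2)||A x - b||^2 + alpha*||z||_1   subject to   x = z.
# The problem data and parameter values below are illustrative.
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
alpha, rho = 0.1, 1.0                       # regularization weight, penalty parameter

x = np.zeros(20)
z = np.zeros(20)
u = np.zeros(20)                            # scaled dual variable (multiplier / rho)
M = np.linalg.inv(A.T @ A + rho * np.eye(20))   # cached: every x-update reuses this matrix
Atb = A.T @ b

for k in range(200):
    x = M @ (Atb + rho * (z - u))           # x-update: smooth quadratic subproblem
    z = soft_threshold(x + u, alpha / rho)  # z-update: proximal step on the l1 term
    u = u + x - z                           # dual (multiplier) update
    if np.linalg.norm(x - z) < 1e-8:        # stop when the split variables agree
        break

print(np.round(x, 3))
</syntaxhighlight>
Each pass updates x and z one at a time against the augmented Lagrangian of the split problem; this alternation is the Gauss-Seidel-style partial update referred to above.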
== General method ==

Let us say we are solving the following constrained problem:
: \min f(\mathbf{x})
subject to
: c_i(\mathbf{x}) = 0 ~\forall i \in I.
This problem can be solved as a series of unconstrained minimization problems. For reference, we first list the penalty method approach:
: \min \Phi_k (\mathbf{x}) = f (\mathbf{x}) + \mu_k ~ \sum_{i\in I} ~ c_i(\mathbf{x})^2
The penalty method solves this problem, then at the next iteration it re-solves the problem
using a larger value of \mu_k (and using the old solution as the initial guess or "warm-start").
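For concreteness, a minimal sketch of this penalty iteration is given below. It assumes SciPy's BFGS routine for the unconstrained solves; the example problem (minimize x_1^2 + x_2^2 subject to x_1 + x_2 - 1 = 0) and the schedule for \mu_k are illustrative choices.
<syntaxhighlight lang="python">
# Minimal penalty-method sketch; the test problem and the mu schedule are illustrative.
import numpy as np
from scipy.optimize import minimize

def f(x):                                   # objective f(x)
    return x[0]**2 + x[1]**2

def c(x):                                   # equality constraints c_i(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def penalty_objective(x, mu):
    cx = c(x)
    return f(x) + mu * np.dot(cx, cx)       # Phi_k(x) = f(x) + mu_k * sum_i c_i(x)^2

x = np.zeros(2)                             # initial guess, reused as a warm start
mu = 1.0
for k in range(6):
    x = minimize(penalty_objective, x, args=(mu,), method="BFGS").x
    mu *= 10.0                              # constraint is enforced only as mu -> infinity

print(x)                                    # approaches (0.5, 0.5) as mu grows
</syntaxhighlight>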
The augmented Lagrangian method uses the following unconstrained objective:
: \min \Phi_k (\mathbf{x}) = f (\mathbf{x}) + \frac{\mu_k}{2} ~ \sum_{i\in I} ~ c_i(\mathbf{x})^2 - \sum_{i\in I} ~ \lambda_i c_i(\mathbf{x})
and after each iteration, in addition to updating \mu_k, the variable \lambda is also updated according to the rule
:\lambda_i \leftarrow \lambda_i - \mu_k c_i(\mathbf{x}_k)
where \mathbf{x}_k is the solution to the unconstrained problem at the ''k''th step, i.e. \mathbf{x}_k = \operatorname{argmin} \Phi_k(\mathbf{x}).
The variable \lambda is an estimate of the Lagrange multiplier, and the accuracy of this estimate improves at every step. The major advantage of the method is that, unlike the penalty method, it is not necessary to take \mu \rightarrow \infty in order to solve the original constrained problem. Instead, because of the presence of the Lagrange multiplier term, \mu can stay much smaller.
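For concreteness, a minimal sketch of this iteration on the same illustrative problem is given below. SciPy's BFGS routine is assumed for the inner minimizations, and \mu is simply held fixed to emphasize that it need not grow.
<syntaxhighlight lang="python">
# Minimal method-of-multipliers sketch; test problem and parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

def f(x):                                   # objective f(x)
    return x[0]**2 + x[1]**2

def c(x):                                   # equality constraints c_i(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def augmented_lagrangian(x, lam, mu):
    cx = c(x)
    # Phi_k(x) = f(x) + (mu_k/2) * sum_i c_i(x)^2 - sum_i lambda_i * c_i(x)
    return f(x) + 0.5 * mu * np.dot(cx, cx) - np.dot(lam, cx)

x = np.zeros(2)                             # warm-started across outer iterations
lam = np.zeros(1)                           # multiplier estimate lambda
mu = 10.0                                   # held fixed: no need for mu -> infinity

for k in range(20):
    x = minimize(augmented_lagrangian, x, args=(lam, mu),
                 method="BFGS", tol=1e-10).x
    lam = lam - mu * c(x)                   # lambda_i <- lambda_i - mu_k c_i(x_k)
    if np.linalg.norm(c(x)) < 1e-8:         # stop once the constraint is satisfied
        break

print(x, lam)                               # expected: x near (0.5, 0.5), lambda near 1
</syntaxhighlight>
Here the multiplier update alone drives the constraint violation to zero, whereas the pure penalty iteration above needs \mu_k to grow without bound.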
The method can be extended to handle inequality constraints. For a discussion of practical improvements, see.〔

Excerpt source: the free encyclopedia Wikipedia. Read the full article "Augmented Lagrangian method" on Wikipedia.